We thank the reviewers for their positive feedback, which mentions DTSIL as an effective, novel method for a significant problem (R2, R3, R4). We will incorporate the suggestions:
- More details are provided in Appendix B.1; we will add these pointers and further descriptions to the main text to clarify our algorithm.
- We will make the connection between DTSIL and prior work clearer, especially for the imitation-learning part.
- Pseudocode for organizing clusters is given in Appendix A.3.
- DTSIL+EXP without SL performs worse on Montezuma's Revenge. Assume the agent's location in the state embeddings is normalized.
- We will add this comparison and more discussion of off-policy and model-based exploration methods.
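As a rough illustration of the cluster-organization step (the actual pseudocode is in Appendix A.3), here is a minimal sketch: clusters are keyed by state embeddings with a distance threshold, and each cluster keeps its highest-return trajectory. The threshold, embedding form, and update rule are our assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def update_clusters(clusters, embedding, trajectory, ret, threshold=1.0):
    """Assumed sketch: a new trajectory joins the nearest cluster if its
    embedding is within `threshold` of that cluster's center, replacing
    the stored trajectory only when it achieved a higher return;
    otherwise it opens a new cluster."""
    for c in clusters:
        if np.linalg.norm(c["center"] - embedding) < threshold:
            if ret > c["return"]:  # keep the better demonstration
                c["trajectory"], c["return"] = trajectory, ret
            return clusters
    clusters.append({"center": embedding, "trajectory": trajectory,
                     "return": ret})
    return clusters

clusters = []
update_clusters(clusters, np.array([0.0, 0.0]), ["s0"], 1.0)
update_clusters(clusters, np.array([0.1, 0.0]), ["s0", "s1"], 2.0)  # replaces
update_clusters(clusters, np.array([5.0, 5.0]), ["t0"], 0.5)        # new cluster
print(len(clusters))  # 2
```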
We thank the reviewers for the positive feedback and greatly appreciate the critical and constructive suggestions.
Thank you for your valuable feedback, which is very helpful in improving the paper. We are encouraged by the broadly positive reviews.
- "Put this in the context of other work on computational homogenization / multi-scale finite element methods": Our method is related to these and to the boundary element method (BEM).
- "Limitations associated with micro-scale buckling... the coarse-grained behavior might exhibit hysteretic effects": Good point.
- "How sensitive is the outer optimization to the accuracy of the surrogate gradients?"
- "Do you know how the CES method scales with system size in terms of accuracy and evaluation time?"
- "The method to solve the outer optimization over BCs to find minimum-energy solutions to the composed surrogates": Free DoFs are optimized to minimize the total predicted energy using L-BFGS.
- "The discussion of the surrogate and i.i.d. ..."
- "Are the BCs shared when a boundary is common between two cells?": Yes. We have one DoF for each blue point in Fig. 2.
- "It is not clear how the HMC and PDE solver are used together": HMC is used to generate training BCs, preferring larger ... The PDE solver is used to compute the gradient of the pdf (which depends on E) w.r.t. the BCs: given BCs, we run the solver to determine the internal u and E, and compute dE/dBC. We then use this to compute the gradient of the pdf w.r.t. the BCs, needed for the leapfrog step.
- "Does the HMC require a significant burn-in time before producing reasonable samples?": No. Per the appendix, HMC took between 3 and 100 leapfrog steps per sample.
- "The process of using the surrogates to solve the original problem can be explained in more detail."
- "Newton's method is neither the fastest nor the most stable... a comparison with more sophisticated methods would be ...": From a brief look, Liu et al.'s method appears tailored for ...
- Reviewer 5: "There is one outlier in L2 compression that was quite bad": We will discuss this in the main paper.
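The outer optimization described above (free boundary DoFs minimized under the composed surrogate energy via L-BFGS) can be sketched as follows. The quadratic per-cell energies here are purely illustrative stand-ins for the learned surrogates, not the paper's models.

```python
import numpy as np
from scipy.optimize import minimize

# Two toy "cells" whose predicted energies are quadratics in a shared
# boundary-DoF vector u; these matrices are illustrative assumptions.
A1 = np.array([[2.0, 0.3], [0.3, 1.0]])
A2 = np.array([[1.0, -0.2], [-0.2, 3.0]])
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 2.0])

def total_energy(u):
    # Sum of per-cell surrogate energies over shared boundary DoFs.
    return 0.5 * u @ A1 @ u - b1 @ u + 0.5 * u @ A2 @ u - b2 @ u

def total_grad(u):
    return (A1 + A2) @ u - (b1 + b2)

# Minimize the total predicted energy over the free DoFs with L-BFGS.
res = minimize(total_energy, x0=np.zeros(2), jac=total_grad,
               method="L-BFGS-B")
u_star = np.linalg.solve(A1 + A2, b1 + b2)  # closed-form check
print(res.x, u_star)
```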
"A comment might help the reader situate this work within the more usual (less idyllic) context of approximating ...": This is a good suggestion; we will relate our approach to other work on learning energies.
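The HMC mechanics described above (the gradient of the log-density w.r.t. the BCs drives the leapfrog momentum updates) can be sketched as below. The Gaussian target is a stand-in for the energy-dependent density; in the actual method, `grad_logp` would come from the PDE solve via dE/dBC.

```python
import numpy as np

def leapfrog(q, p, grad_logp, step, n_steps):
    """Standard leapfrog integrator for one HMC trajectory.
    grad_logp(q) is the gradient of the log-density at position q."""
    p = p + 0.5 * step * grad_logp(q)      # initial half-step in momentum
    for _ in range(n_steps - 1):
        q = q + step * p                    # full position step
        p = p + step * grad_logp(q)         # full momentum step
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)       # final half-step
    return q, p

# Stand-in target: standard Gaussian, so grad log p(q) = -q.
rng = np.random.default_rng(0)
q, p = rng.normal(size=2), rng.normal(size=2)
q2, p2 = leapfrog(q, p, lambda q: -q, step=0.1, n_steps=10)
print(q2, p2)
```

With a stable step size, leapfrog approximately conserves the Hamiltonian, which is what keeps HMC acceptance rates high.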
We thank the reviewers for their positive comments and useful suggestions. All error bars indicate 95% confidence intervals.
- "Are there cases where ERM outperforms MOM?": We took this as a primitive and did not evaluate it in our submission. Note that as we reduce the number of batches, MOM approaches ERM.
- Weakness 1: "Figure 1 should indicate whether MOM consistently achieves good reconstruction."
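A minimal sketch of the MOM-to-ERM point above: the median-of-means estimator splits the data into batches, averages each batch, and returns the median of the batch means; with a single batch it is exactly the empirical mean (the ERM estimate). The function and data here are illustrative, not from the paper.

```python
import numpy as np

def median_of_means(x, n_batches):
    """Median-of-means estimate of E[x]: split into n_batches batches,
    average each, return the median of the batch means.
    n_batches == 1 reduces to the plain empirical mean (ERM)."""
    batches = np.array_split(np.asarray(x, dtype=float), n_batches)
    return float(np.median([b.mean() for b in batches]))

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
print(median_of_means(x, 1))  # 22.0 -- equals x.mean(), outlier-sensitive
print(median_of_means(x, 5))  # 3.0  -- robust to the outlier
```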
Below are our responses to individual questions and comments. Reviewer #1: We thank you for your positive feedback.
We really appreciate the time and expertise you have invested in these reviews, and we thank you for your positive feedback.
- It is still possible that there is another method to prove the result for regression.
- Presentation of Algorithm 2: We will make Algorithm 2 more formal and make the proof of Theorem 8 more readable.
- (A multi-class classification algorithm based on an ordinal regression machine.) Thanks for raising this issue; we will update Section 2.3 to clarify that the ...
- Realizable or agnostic setting: For the sake of clear presentation, we only discussed the realizable setting in the paper.